Training with Mixed-Precision Floating-Point Assignments
When training deep neural networks, keeping all tensors in high precision
(e.g., 32-bit or even 16-bit floats) is often wasteful. However, keeping all
tensors in low precision (e.g., 8-bit floats) can lead to unacceptable accuracy
loss. Hence, it is important to use a precision assignment -- a mapping from
all tensors (arising in training) to precision levels (high or low) -- that
keeps most of the tensors in low precision and leads to sufficiently accurate
models. We provide a technique that explores this memory-accuracy tradeoff by
generating precision assignments for convolutional neural networks that (i) use
less memory and (ii) lead to more accurate convolutional networks at the same
time, compared to the precision assignments considered by prior work in
low-precision floating-point training. We evaluate our technique on image
classification tasks by training convolutional networks on CIFAR-10, CIFAR-100,
and ImageNet. Our method typically provides > 2x memory reduction over a
baseline precision assignment while preserving training accuracy, and gives
further reductions by trading off accuracy. Compared to other baselines which
sometimes cause training to diverge, our method provides similar or better
memory reduction while avoiding divergence.
Comment: Published in TML
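The core object in this abstract, a precision assignment, can be pictured as a plain mapping from tensor names to precision levels. The sketch below is illustrative only: the tensor names, sizes, and byte widths are assumptions, not values from the paper, and the real technique chooses the assignment automatically.

```python
# Hypothetical sketch of a precision assignment: a mapping from tensor
# names to precision levels, and the memory footprint it implies.
# All names and sizes are illustrative, not taken from the paper.

BYTES = {"high": 4, "low": 1}  # e.g., 32-bit vs. 8-bit floats

def memory_bytes(tensor_sizes, assignment):
    """Total memory for the tensors under a given precision assignment."""
    return sum(n * BYTES[assignment[name]] for name, n in tensor_sizes.items())

# Element counts for a few tensors arising in training (made up).
tensor_sizes = {"conv1.w": 1728, "conv1.act": 65536, "fc.w": 5120}

all_high = {name: "high" for name in tensor_sizes}  # baseline assignment
mixed = {"conv1.w": "high", "conv1.act": "low", "fc.w": "low"}

baseline = memory_bytes(tensor_sizes, all_high)
reduced = memory_bytes(tensor_sizes, mixed)
print(f"memory reduction: {baseline / reduced:.2f}x")
```

Keeping the large activation tensor in low precision is what drives the reduction here; the paper's contribution is finding such assignments while preserving training accuracy.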
Optimizing Mixture of Experts using Dynamic Recompilations
The Mixture of Experts architecture allows for outrageously large neural
networks by scaling model parameter size independently from computational
demand (FLOPs). However, current DNN frameworks cannot effectively support the
dynamic data flow in Mixture of Experts, and implementations on top of these
frameworks need to use workarounds that introduce significant overheads. To
address the limitation of these frameworks, we present DynaMoE, a DNN library
that uses dynamic recompilations to optimize and adapt the use of computational
resources to the dynamic needs of Mixture of Experts models. Our evaluation
shows that DynaMoE achieves a 1.8x speedup and supports 2.3x larger model sizes
when compared to existing MoE systems, even when not using recompilations. We
then present further optimizations enabled by dynamic recompilations that yield
an additional 1.7x speedup while simultaneously reducing memory pressure and
improving model quality.
Comment: 13 pages, 15 figures
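The "dynamic data flow" this abstract refers to can be sketched in a few lines: a gating function routes each token to an expert, so the per-expert batch sizes change from step to step, which is the shape information a statically compiled graph cannot pin down. The names below are illustrative and this is not DynaMoE's API.

```python
# Minimal sketch of the dynamic data flow in a Mixture of Experts layer:
# a gate routes each token to one expert, so per-expert loads vary per
# batch. Illustrative only; not DynaMoE's actual interface.

import random

def route(tokens, num_experts, gate):
    """Group tokens by the expert the gate selects for each one."""
    buckets = {e: [] for e in range(num_experts)}
    for t in tokens:
        buckets[gate(t)].append(t)
    return buckets

random.seed(0)
tokens = list(range(16))
gate = lambda t: random.randrange(4)  # stand-in for a learned gating network

buckets = route(tokens, 4, gate)
# Expert loads differ from batch to batch -- the dynamic shapes that
# motivate recompiling rather than fixing a static graph.
print([len(b) for b in buckets.values()])
```

A framework that recompiles can size each expert's kernels to the actual load of the current batch instead of padding to a worst case.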
Collaboration Versus Cheating
We outline how we detected programming plagiarism in an introductory online
course for a Master of Science in Computer Science program, how we achieved a
statistically significant reduction in programming plagiarism by combining a
clear explanation of university and class policy on academic honesty,
reinforced by a short but formal assessment, and how we evaluated plagiarism
rates before and after implementing our policy and assessment.
Comment: 7 pages, 1 figure, 5 tables, SIGCSE 201
Eventually Sound Points-To Analysis with Specifications
Static analyses make the increasingly tenuous assumption that all source code is available for analysis; for example, large libraries often call into native code that cannot be analyzed. We propose a points-to analysis that initially makes optimistic assumptions about missing code, and then inserts runtime checks that report counterexamples to these assumptions that occur during execution. Our approach guarantees eventual soundness, which combines two guarantees: (i) the runtime checks are guaranteed to catch the first counterexample that occurs during any execution, in which case execution can be terminated to prevent harm, and (ii) only finitely many counterexamples ever occur, implying that the static analysis eventually becomes statically sound with respect to all remaining executions. We implement Optix, an eventually sound points-to analysis for Android apps, where the code of the Android framework is missing. We show that the runtime checks added by Optix incur low overhead on real programs, and demonstrate how Optix improves a client information flow analysis for detecting Android malware.
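The optimistic-assumption-plus-runtime-check pattern described above can be sketched abstractly: the static analysis assumes missing code only produces values of certain types, and an inserted guard reports the first counterexample at runtime. Everything below (the guard's name, the assumed type set, the call site label) is a hypothetical illustration, not Optix's actual instrumentation.

```python
# Hedged sketch of "eventually sound" checking: guard an optimistic
# static assumption about missing code with a runtime check that
# reports the first counterexample. Illustrative only; not Optix's API.

class AssumptionViolation(Exception):
    """Raised when execution contradicts a static assumption."""

def check_points_to(value, assumed_types, site):
    """Runtime check: a value flowing out of missing code must have one
    of the types the static analysis optimistically assumed for it."""
    if type(value).__name__ not in assumed_types:
        # First counterexample: report it so execution can be stopped
        # and the assumption (and dependent static results) refined.
        raise AssumptionViolation(f"{site}: unexpected {type(value).__name__}")
    return value

# Suppose the analysis assumed a library call returns only str values.
def analyzed_call(lib_fn):
    return check_points_to(lib_fn(), {"str"}, "site#1")

print(analyzed_call(lambda: "ok"))  # assumption holds, value passes through
try:
    analyzed_call(lambda: 42)       # counterexample caught at runtime
except AssumptionViolation as e:
    print("violation:", e)
```

Each reported counterexample shrinks the gap between the optimistic model and reality; the paper's guarantee is that only finitely many such reports ever occur.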